Gradient Sampling Methods with Inexact Subproblem Solutions and Gradient Aggregation

Authors

Abstract

Gradient sampling (GS) methods for the minimization of objective functions that may be nonconvex and/or nonsmooth are proposed, analyzed, and tested. One of the most computationally expensive components of contemporary GS methods is the need to solve a convex quadratic subproblem in each iteration. By contrast, the methods proposed in this paper allow the use of inexact solutions of these subproblems, which, as proved in the paper, can be incorporated without the loss of theoretical convergence guarantees. Numerical experiments show that, by exploiting inexact subproblem solutions, one can consistently reduce the computational effort required by the method. Additionally, a strategy for aggregating gradient information after a subproblem is solved (potentially inexactly) has been exploited in bundle methods for nonsmooth optimization. It is shown that such an aggregation scheme can be introduced without the loss of theoretical convergence guarantees, and numerical experiments show that incorporating this approach also reduces the computational effort required by the method.
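As a rough, self-contained illustration of the idea (not the paper's exact algorithm), the sketch below performs one GS step in which the simplex-constrained quadratic subproblem min over lambda of ||G lambda||^2 is solved only approximately by a fixed number of Frank-Wolfe iterations. The helper names (inexact_simplex_qp, gs_step), the sampling distribution, and all tolerances are invented for illustration.

```python
import numpy as np

def inexact_simplex_qp(G, num_steps=20):
    """Approximately solve min ||G @ lam||^2 over the unit simplex with a
    fixed number of Frank-Wolfe steps; truncating early makes the
    subproblem solution inexact."""
    m = G.shape[1]
    lam = np.full(m, 1.0 / m)             # start at the simplex center
    for k in range(num_steps):
        grad = 2.0 * G.T @ (G @ lam)      # gradient of the QP objective
        j = np.argmin(grad)               # best simplex vertex (linear oracle)
        step = 2.0 / (k + 2.0)            # standard Frank-Wolfe step size
        lam *= 1.0 - step
        lam[j] += step
    return lam

def gs_step(f, grad_f, x, eps=1e-2, m=10, rng=None):
    """One gradient sampling step with an inexactly solved QP subproblem."""
    rng = np.random.default_rng() if rng is None else rng
    n = x.size
    u = rng.standard_normal((m, n))
    u /= np.linalg.norm(u, axis=1, keepdims=True)
    pts = x + eps * rng.random((m, 1)) ** (1.0 / n) * u   # uniform in eps-ball
    G = np.column_stack([grad_f(x)] + [grad_f(p) for p in pts])
    lam = inexact_simplex_qp(G)
    d = -G @ lam                # approximate steepest-descent direction;
                                # G @ lam is the aggregated gradient that an
                                # aggregation scheme would carry forward
    t = 1.0
    while f(x + t * d) > f(x) - 1e-4 * t * (d @ d) and t > 1e-12:
        t *= 0.5                # backtracking (Armijo-style) line search
    return x + t * d
```

Capping the Frank-Wolfe iteration count is one simple way to realize "inexact subproblem solutions"; the paper's actual inexactness conditions are tied to its convergence theory.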


Similar Articles

Levenberg-Marquardt methods based on probabilistic gradient models and inexact subproblem solution, with application to data assimilation

The Levenberg-Marquardt algorithm is one of the most popular algorithms for the solution of nonlinear least squares problems. Motivated by the problem structure in data assimilation, we consider in this paper the extension of the classical Levenberg-Marquardt algorithm to the scenarios where the linearized least squares subproblems are solved inexactly and/or the gradient model is noisy and acc...
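A minimal sketch of the inexactness mechanism in a Levenberg-Marquardt setting, assuming user-supplied residual and jacobian callables; the helper names and the iteration cap are illustrative, not taken from the cited paper.

```python
import numpy as np

def truncated_cg(matvec, b, max_iters=10):
    """A few conjugate-gradient iterations on A d = b (A given as a matvec);
    stopping early yields an inexact solution of the linearized subproblem."""
    d = np.zeros_like(b)
    res = b.copy()
    p = res.copy()
    rs = res @ res
    for _ in range(max_iters):
        Ap = matvec(p)
        alpha = rs / (p @ Ap)
        d += alpha * p
        res -= alpha * Ap
        rs_new = res @ res
        if rs_new < 1e-16:
            break
        p = res + (rs_new / rs) * p
        rs = rs_new
    return d

def lm_step_inexact(residual, jacobian, x, mu, cg_iters=5):
    """One Levenberg-Marquardt step: solve (J^T J + mu I) d = -J^T r only
    approximately via truncated CG, then update the iterate."""
    r = residual(x)
    J = jacobian(x)
    d = truncated_cg(lambda v: J.T @ (J @ v) + mu * v, -J.T @ r, cg_iters)
    return x + d
```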


Intermediate Gradient Methods for Smooth Convex Problems with Inexact Oracle

Between the robust but slow (primal or dual) gradient methods and the fast but error-sensitive fast gradient methods, our goal in this paper is to develop first-order methods for smooth convex problems with intermediate speed and intermediate sensitivity to errors. We develop a general family of first-order methods, the Intermediate Gradient Method (IGM), based on two sequences of coefficie...


Without-Replacement Sampling for Stochastic Gradient Methods

Stochastic gradient methods for machine learning and optimization problems are usually analyzed assuming data points are sampled with replacement. In contrast, sampling without replacement is far less understood, yet in practice it is very common, often easier to implement, and usually performs better. In this paper, we provide competitive convergence guarantees for without-replacement sampling...
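A minimal sketch of the sampling scheme in question, assuming a per-example gradient oracle grad_i(x, i) (the name and hyperparameters are my own): random reshuffling visits each of the n data points exactly once per epoch, in contrast to i.i.d. with-replacement draws.

```python
import numpy as np

def sgd_without_replacement(grad_i, x0, n, epochs=5, lr=0.1, rng=None):
    """SGD with random reshuffling: a fresh permutation of the n indices is
    drawn each epoch, so every data point is visited exactly once per pass."""
    rng = np.random.default_rng() if rng is None else rng
    x = x0.copy()
    for _ in range(epochs):
        for i in rng.permutation(n):      # without-replacement order
            x -= lr * grad_i(x, i)
    return x
```

Swapping rng.permutation(n) for rng.integers(0, n, size=n) recovers the with-replacement sampling under which SGD is usually analyzed.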


Convergence Rates of Inexact Proximal-Gradient Methods for Convex Optimization

We consider the problem of optimizing the sum of a smooth convex function and a non-smooth convex function using proximal-gradient methods, where an error is present in the calculation of the gradient of the smooth term or in the proximity operator with respect to the non-smooth term. We show that both the basic proximal-gradient method and the accelerated proximal-gradient method achieve the s...
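A minimal sketch of that setting under my own naming (inexact_prox_grad, grad_err): the basic proximal-gradient iteration with an optional error injected into the smooth gradient, followed by a small lasso example in which the prox is soft-thresholding. Errors inside the prox evaluation itself are handled analogously in the cited work.

```python
import numpy as np

def inexact_prox_grad(grad_f, prox_g, x0, L, iters=200, grad_err=None):
    """Iterate x+ = prox_{g/L}(x - (grad f(x) + e)/L), where e is an
    optional error term in the smooth gradient (grad_err=None is exact)."""
    x = x0.copy()
    for k in range(iters):
        e = grad_err(k, x) if grad_err is not None else 0.0
        x = prox_g(x - (grad_f(x) + e) / L, 1.0 / L)
    return x

# Example: lasso, f(x) = 0.5 * ||Ax - b||^2, g(x) = lam * ||x||_1.
rng = np.random.default_rng(0)
A = rng.standard_normal((40, 10))
b = A @ np.ones(10)
lam = 0.1
L = np.linalg.norm(A, 2) ** 2                 # Lipschitz constant of grad f
soft = lambda z, t: np.sign(z) * np.maximum(np.abs(z) - lam * t, 0.0)
x = inexact_prox_grad(lambda x: A.T @ (A @ x - b), soft, np.zeros(10), L)
```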


Inexact Spectral Projected Gradient Methods on Convex Sets

A new method is introduced for large-scale convex constrained optimization. The general model algorithm involves, at each iteration, the approximate minimization of a convex quadratic on the feasible set of the original problem, and global convergence is obtained by means of nonmonotone line searches. A specific algorithm, the Inexact Spectral Projected Gradient method (ISPG), is implemented usi...
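A compact sketch of the underlying SPG iteration, under my own naming and parameter choices: a Barzilai-Borwein (spectral) step length, projection onto the feasible set (a box here, via np.clip), and a nonmonotone Armijo search over the last few function values. In ISPG, the projection step itself may additionally be computed only approximately (e.g., by Dykstra's algorithm).

```python
import numpy as np

def spg(f, grad_f, project, x0, iters=100, memory=10):
    """Spectral projected gradient sketch with a nonmonotone line search."""
    x = project(np.asarray(x0, dtype=float))
    g = grad_f(x)
    alpha = 1.0
    hist = [f(x)]
    for _ in range(iters):
        d = project(x - alpha * g) - x        # spectral projected direction
        if np.linalg.norm(d) < 1e-10:
            break                             # (approximately) stationary
        t = 1.0
        fref = max(hist[-memory:])            # nonmonotone reference value
        while f(x + t * d) > fref + 1e-4 * t * (g @ d) and t > 1e-12:
            t *= 0.5
        x_new = x + t * d
        g_new = grad_f(x_new)
        s, y = x_new - x, g_new - g
        sy = s @ y
        alpha = np.clip((s @ s) / sy, 1e-10, 1e10) if sy > 0 else 1.0
        x, g = x_new, g_new
        hist.append(f(x))
    return x

# Usage on a box-constrained problem: spg(f, grad_f,
#     lambda z: np.clip(z, 0.0, 1.0), x0)
```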



Journal

Journal title: INFORMS Journal on Optimization

Year: 2022

ISSN: 2575-1484, 2575-1492

DOI: https://doi.org/10.1287/ijoo.2022.0073